MFA-DVR: Direct Volume Rendering of MFA Models
3D volume rendering is widely used to reveal insightful intrinsic patterns of
volumetric datasets across many domains. However, the complex structures and
varying scales of volumetric data can make efficiently generating high-quality
volume rendering results a challenging task. Multivariate functional
approximation (MFA) is a new data model that addresses some of the critical
challenges: high-order evaluation of both value and derivative anywhere in the
spatial domain, compact representation for large-scale volumetric data, and
uniform representation of both structured and unstructured data. In this paper,
we present MFA-DVR, the first direct volume rendering pipeline utilizing the
MFA model, for both structured and unstructured volumetric datasets. We
demonstrate improved rendering quality using MFA-DVR on both synthetic and real
datasets through a comparative study. We show that MFA-DVR not only generates
more faithful volume rendering than using local filters but also performs
faster on high-order interpolations on structured and unstructured datasets.
MFA-DVR is implemented in the existing volume rendering pipeline of the
Visualization Toolkit (VTK) to be accessible by the scientific visualization
community.
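The abstract above does not include code; as a hedged illustration of the "high-order evaluation of both value and derivative anywhere in the spatial domain" that MFA provides, the following sketch shows 1D B-spline evaluation via de Boor's algorithm, the kind of functional machinery a tensor-product model like MFA builds on. The knots and control points here are invented for the example and are not from the paper.

```python
# Illustrative only: MFA encodes data as smooth functional models; this 1D
# de Boor evaluation shows how such a model yields values and exact
# derivatives at arbitrary points (knots/control points are made up here).

def de_boor(x, knots, ctrl, degree):
    """Evaluate a B-spline of the given degree at parameter x."""
    n = len(ctrl)
    k = degree
    while k < n - 1 and x >= knots[k + 1]:
        k += 1  # find the knot span containing x
    d = [ctrl[j + k - degree] for j in range(degree + 1)]
    for r in range(1, degree + 1):
        for j in range(degree, r - 1, -1):
            lo, hi = knots[j + k - degree], knots[j + 1 + k - r]
            a = (x - lo) / (hi - lo)
            d[j] = (1 - a) * d[j - 1] + a * d[j]
    return d[degree]

def derivative_spline(knots, ctrl, degree):
    """Knots and control points of the first-derivative spline."""
    q = [degree * (ctrl[i + 1] - ctrl[i]) / (knots[i + degree + 1] - knots[i + 1])
         for i in range(len(ctrl) - 1)]
    return knots[1:-1], q, degree - 1

# Quadratic Bezier segment as a B-spline: f(x) = 2x(1 - x) on [0, 1]
knots, ctrl, p = [0, 0, 0, 1, 1, 1], [0.0, 1.0, 0.0], 2
value = de_boor(0.5, knots, ctrl, p)              # f(0.5)  = 0.5
dk, dc, dp = derivative_spline(knots, ctrl, p)
slope = de_boor(0.5, dk, dc, dp)                  # f'(0.5) = 0.0
```

Because the model is evaluated analytically rather than through a local filter, the derivative needed for gradient-based shading comes from the same representation as the value itself.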
Heterogeneous hierarchical workflow composition
Workflow systems promise scientists an automated end-to-end path from hypothesis to discovery. However, expecting any single workflow system to deliver such a wide range of capabilities is impractical. A more practical solution is to compose the end-to-end workflow from more than one system. With this goal in mind, the integration of task-based and in situ workflows is explored, where the result is a hierarchical heterogeneous workflow composed of subworkflows, with different levels of the hierarchy using different programming, execution, and data models. Materials science use cases demonstrate the advantages of such heterogeneous hierarchical workflow composition.

This work is a collaboration between Argonne National Laboratory and the Barcelona Supercomputing Center within the Joint Laboratory for Extreme-Scale Computing. This research is supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, under contract number DE-AC02-06CH11357, program manager Laura Biven; by the Spanish Government (SEV2015-0493); by the Spanish Ministry of Science and Innovation (contract TIN2015-65316-P); and by Generalitat de Catalunya (contract 2014-SGR-1051).
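The composition idea can be sketched in miniature. This is a hypothetical toy (all class and function names are invented here, not from the paper): an outer task-based workflow whose nodes may themselves be in situ subworkflows, so different levels of the hierarchy use different execution models.

```python
# Hypothetical sketch of hierarchical heterogeneous composition: the outer
# level is a task graph; one of its "tasks" is itself an in situ subworkflow
# that couples a simulation with analyses per time step.

class TaskWorkflow:
    """Outer level: an ordered task graph, executed task by task."""
    def __init__(self, tasks):
        self.tasks = tasks
    def run(self, data):
        for task in self.tasks:
            data = task(data)
        return data

class InSituWorkflow:
    """Inner level: producer and consumers coupled at every step,
    avoiding a round trip through the file system."""
    def __init__(self, simulate, analyses, steps):
        self.simulate, self.analyses, self.steps = simulate, analyses, steps
    def __call__(self, state):
        results = []
        for _ in range(self.steps):
            state = self.simulate(state)
            results.append([a(state) for a in self.analyses])
        return results

# Compose: a preprocessing task feeds an in situ simulation/analysis loop.
inner = InSituWorkflow(simulate=lambda s: s + 1,
                       analyses=[lambda s: s * s],
                       steps=3)
outer = TaskWorkflow([lambda s: s * 10, inner])
print(outer.run(1))  # -> [[121], [144], [169]]
```

The point of the sketch is only the shape: the outer workflow never needs to know that one of its tasks internally runs a differently scheduled subworkflow.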
Adaptively Placed Multi-Grid Scene Representation Networks for Large-Scale Data Visualization
Scene representation networks (SRNs) have been recently proposed for
compression and visualization of scientific data. However, state-of-the-art
SRNs do not adapt the allocation of available network parameters to the complex
features found in scientific data, leading to a loss in reconstruction quality.
We address this shortcoming with an adaptively placed multi-grid SRN (APMGSRN)
and propose a domain decomposition training and inference technique for
accelerated parallel training on multi-GPU systems. We also release an
open-source neural volume rendering application that allows plug-and-play
rendering with any PyTorch-based SRN. Our proposed APMGSRN architecture uses
multiple spatially adaptive feature grids that learn where to be placed within
the domain to dynamically allocate more neural network resources where error is
high in the volume, improving state-of-the-art reconstruction accuracy of SRNs
for scientific data without requiring expensive octree refining, pruning, and
traversal like previous adaptive models. In our domain decomposition approach
for representing large-scale data, we train a set of APMGSRNs in parallel on
separate bricks of the volume to reduce training time while avoiding the
overhead of an out-of-core solution for volumes too large to fit in GPU
memory. After training, the lightweight SRNs are used for real-time neural
volume rendering in our open-source renderer, where arbitrary view angles and
transfer functions can be explored. A copy of this paper, all code, all models
used in our experiments, and all supplemental materials and videos are
available at https://github.com/skywolf829/APMGSRN.

Comment: Accepted to IEEE VIS 202
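The released code is linked above; as a hedged illustration of the domain decomposition step only, the sketch below bricks a volume with NumPy. In the actual system each brick would be assigned to its own APMGSRN (and GPU); the brick size here is invented for the example.

```python
# Illustrative sketch (not the released code): split a large volume into
# bricks that each fit in GPU memory, so one small network can be trained
# per brick in parallel without an out-of-core training loop.
import numpy as np

def brick_volume(volume, brick_shape):
    """Return (global_offset, brick) pairs tiling the volume."""
    bricks = []
    for z in range(0, volume.shape[0], brick_shape[0]):
        for y in range(0, volume.shape[1], brick_shape[1]):
            for x in range(0, volume.shape[2], brick_shape[2]):
                b = volume[z:z + brick_shape[0],
                           y:y + brick_shape[1],
                           x:x + brick_shape[2]]
                bricks.append(((z, y, x), b))
    return bricks

vol = np.arange(4 * 4 * 4, dtype=np.float32).reshape(4, 4, 4)
bricks = brick_volume(vol, (2, 2, 2))
# 8 bricks of shape (2, 2, 2); each offset lets the renderer map a brick's
# local coordinates back into the global domain at inference time.
```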
Toward High-Performance Computing and Big Data Analytics Convergence: The Case of Spark-DIY
Convergence between high-performance computing (HPC) and big data analytics (BDA) is currently an established research area that has spawned new opportunities for unifying the platform layer and data abstractions in these ecosystems. This work presents an architectural model that enables the interoperability of established BDA and HPC execution models, reflecting the key design features that interest both the HPC and BDA communities, and including an abstract data collection and operational model that generates a unified interface for hybrid applications. This architecture can be implemented in different ways depending on the process- and data-centric platforms of choice and the mechanisms put in place to effectively meet the requirements of the architecture. The Spark-DIY platform is introduced in the paper as a prototype implementation of the architecture proposed. It preserves the interfaces and execution environment of the popular BDA platform Apache Spark, making it compatible with any Spark-based application and tool, while providing efficient communication and kernel execution via DIY, a powerful communication pattern library built on top of MPI. Later, Spark-DIY is analyzed in terms of performance by building a representative use case from the hydrogeology domain, EnKF-HGS. This application is a clear example of how current HPC simulations are evolving toward hybrid HPC-BDA applications, integrating HPC simulations within a BDA environment.

This work was supported in part by the Spanish Ministry of Economy, Industry and Competitiveness under Grant TIN2016-79637-P (Toward Unification of HPC and Big Data Paradigms), in part by the Spanish Ministry of Education under Grant FPU15/00422 (Training Program for Academic and Teaching Staff), in part by the Advanced Scientific Computing Research program, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357, and in part by the DOE under Agreement DE-DC000122495, Program Manager Laura Biven.
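The "abstract data collection and operational model" can be sketched as an interface whose operations are executed by a pluggable backend. This is a hypothetical toy, not Spark-DIY's actual API: in the real system the backend would be Spark's engine or DIY blocks over MPI rather than the local stand-in used here.

```python
# Hypothetical sketch of the architectural idea (names invented here): one
# collection abstraction, multiple interchangeable execution backends.

class LocalBackend:
    """Stand-in engine; a real system would dispatch the same operations
    to Spark executors or to DIY blocks communicating over MPI."""
    def map(self, data, fn):
        return [fn(x) for x in data]
    def reduce(self, data, fn):
        acc = data[0]
        for x in data[1:]:
            acc = fn(acc, x)
        return acc

class Collection:
    """Unified interface seen by hybrid HPC/BDA applications."""
    def __init__(self, data, backend):
        self.data, self.backend = data, backend
    def map(self, fn):
        return Collection(self.backend.map(self.data, fn), self.backend)
    def reduce(self, fn):
        return self.backend.reduce(self.data, fn)

c = Collection([1, 2, 3, 4], LocalBackend())
total = c.map(lambda x: x * x).reduce(lambda a, b: a + b)  # 1+4+9+16 = 30
```

Because applications program only against the collection interface, swapping the backend changes where and how the work runs without changing application code, which is the compatibility property the abstract claims for Spark-based tools.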
Spark-DIY: A framework for interoperable Spark Operations with high performance Block-Based Data Models
This work was partially funded by the Spanish Ministry of Economy, Industry and Competitiveness under the grant TIN2016-79637-P ”Towards Unification of HPC and Big Data Paradigms”; the Spanish Ministry of Education under the FPU15/00422 Training Program for Academic and Teaching Staff Grant; the Advanced Scientific Computing
Research, Office of Science, U.S. Department of Energy, under Contract DE-AC02-06CH11357; and by DOE with agreement No. DE-DC000122495, program manager Laura Biven
The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q
Remarkable observational advances have established a compelling
cross-validated model of the Universe. Yet, two key pillars of this model --
dark matter and dark energy -- remain mysterious. Sky surveys that map billions
of galaxies to explore the 'Dark Universe' demand a corresponding
extreme-scale simulation capability; the HACC (Hybrid/Hardware Accelerated
Cosmology Code) framework has been designed to deliver this level of
performance now, and into the future. With its novel algorithmic structure,
HACC allows flexible tuning across diverse architectures, including accelerated
and multi-core systems.
On the IBM BG/Q, HACC attains unprecedented scalable performance -- currently
13.94 PFlops at 69.2% of peak and 90% parallel efficiency on 1,572,864 cores
with an equal number of MPI ranks, and a concurrency of 6.3 million. This level
of performance was achieved at extreme problem sizes, including a benchmark run
with more than 3.6 trillion particles, significantly larger than any
cosmological simulation yet performed.Comment: 11 pages, 11 figures, final version of paper for talk presented at
SC1
Toward Feature-Preserving Vector Field Compression
The objective of this work is to develop error-bounded lossy compression methods to preserve topological features in 2D and 3D vector fields. Specifically, we explore the preservation of critical points in piecewise linear and bilinear vector fields. We define the preservation of critical points as (1) keeping each critical point in its original cell and (2) retaining the type of each critical point (e.g., saddle and attracting node), without any false positives, false negatives, or false types in the decompressed data. The key to our method is to adapt a vertex-wise error bound for each grid point and to compress the input data together with the error bound field using a modified lossy compressor. Our compression algorithm can also be embarrassingly parallelized for large data handling and in situ processing. We benchmark our method by comparing it with existing lossy compressors in terms of false positive/negative/type rates, compression ratio, and various vector field visualizations with several scientific applications.
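To make the invariant concrete, here is a hedged sketch (an assumed simplification, not the paper's code) of the per-cell test for the piecewise linear case: a simplex cell of a 2D vector field contains a critical point exactly when the zero vector lies inside the triangle spanned by the three vertex vectors, so a feature-preserving compressor must keep this predicate unchanged for every cell after decompression.

```python
# Illustrative critical-point test for one triangle cell of a piecewise
# linear 2D vector field: the linear interpolant of the vertex vectors
# vanishes inside the cell iff (0, 0) lies inside the triangle they span.

def cross(o, a, b):
    """Signed area of the parallelogram (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def cell_has_critical_point(v0, v1, v2):
    """True if the zero vector lies inside triangle v0-v1-v2 in vector space."""
    origin = (0.0, 0.0)
    s0 = cross(v0, v1, origin)
    s1 = cross(v1, v2, origin)
    s2 = cross(v2, v0, origin)
    return (s0 >= 0 and s1 >= 0 and s2 >= 0) or \
           (s0 <= 0 and s1 <= 0 and s2 <= 0)

# Vertex vectors surrounding zero -> a critical point inside the cell.
assert cell_has_critical_point((-1, -1), (2, -1), (0, 2))
# All vectors point the same way -> no zero crossing in the cell.
assert not cell_has_critical_point((1, 1), (2, 1), (1, 2))
```

A vertex-wise error bound then only needs to be tight enough that no perturbation of v0, v1, v2 within the bound can flip this predicate, which is what makes per-point (rather than uniform) bounds attractive.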